Physiological Genomics
American Physiological Society
Preprints posted in the last 7 days, ranked by how well they match Physiological Genomics's content profile, based on 15 papers previously published here. The average preprint has a 0.01% match score for this journal, so anything above that is already an above-average fit.
Zhang, P.
Background: Preterm birth is one of the most significant causes of neonatal morbidity and mortality. Preterm delivery is classified as iatrogenic or spontaneous. Here we study the role of placental pathology. Materials and Methods: We previously collected placental pathology data together with maternal pregnancy and neonatal birth data, and we investigated the role of placental pathology in preterm delivery. Preterm delivery was categorized as late preterm (34-36 weeks), moderate preterm (32-33 weeks), and extreme preterm (less than 32 weeks). Neonatal, maternal, placental gross and histologic features, and laboratory parameters were compared across groups using chi-square tests for categorical variables and Kruskal-Wallis tests for continuous variables in R. Results: In total, 3,723 singleton placentas, including 3,307 term (88.8%) and 416 preterm (11.2%), were examined together with maternal pregnancy and neonatal birth data. There were 614 placentas (16.5%) from patients with preeclampsia/pregnancy-induced hypertension (PRE/PIH). Preterm delivery showed significantly lower fetal birth weight, placental weight, and fetal-placental ratio (all p<0.01). Maternal Black race was more prevalent in preterm groups (up to 50.8% in extreme preterm vs. 33.2% in term, p<0.01). Preterm delivery was statistically associated with PRE/PIH, maternal vascular malperfusion (MVM), maternal and fetal inflammatory response (MIR and FIR), and increased pre-delivery white blood cell count (WBC). Extreme preterm deliveries were markedly associated with intrauterine fetal death (27.5%, p<0.01) and MIR/FIR (56.7%, p<0.01). After excluding PRE/PIH patients, preterm delivery remained statistically associated with MIR/FIR and increased WBC.
Conclusions: Distinct clinicopathologic profiles exist across preterm subcategories, with MVM predominating in late/moderate preterm and severe pathologic features (including fetal demise and acute inflammation) in extreme preterm. These findings highlight heterogeneous etiologies of preterm delivery.
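The group comparisons described in this abstract (chi-square tests for the categorical variables) can be sketched as follows. The actual analysis was run in R, and the counts below are entirely hypothetical; this Python version only illustrates the statistic being computed:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    return sum((o - e) ** 2 / e
               for o, e in zip([a, b, c, d], expected))

# Hypothetical counts (rows: Black race yes/no, cols: preterm/term):
stat = chi_square_2x2(120, 296, 1098, 2209)
reject = stat > 3.84  # 0.05 critical value at 1 degree of freedom
```

The Kruskal-Wallis tests for the continuous variables follow the same hypothesis-testing pattern, but rank the pooled observations before computing the test statistic.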
Berg, N. K.; Kerchberger, V. E.; Pershad, Y.; Corty, R. W.; Bick, A. G.; Ware, L. B.
Rationale: Sepsis is a life-threatening syndrome causing significant morbidity and mortality, especially in the aging population. Clonal hematopoiesis of indeterminate potential (CHIP) is an age-related condition of clonal expansion of hematopoietic stem cells harboring somatic mutations, associated with increased incidence of chronic illness and all-cause mortality. Objective: To evaluate the association of pre-illness CHIP with mortality and morbidity in patients admitted to the ICU with sepsis. Methods: We performed a retrospective study using a de-identified electronic health record linked with a DNA biorepository. We identified adult patients with sepsis who had DNA collected prior to ICU admission. We tested the association between CHIP status, determined from whole-genome sequencing, and ICU mortality, organ support-free days, and long-term survival, adjusting for age, sex, race, and Sequential Organ Failure Assessment (SOFA) score on ICU admission. Measurements and Main Results: Pre-illness CHIP was associated with increased sepsis mortality (OR = 1.54, 95% CI 1.13 to 2.07, P = 0.005) and fewer days alive and free of organ support (-1.7 days, 95% CI -3.2 to -0.2, P = 0.028) after adjusting for age, sex, race, and SOFA score. In sepsis survivors, CHIP was also associated with increased long-term mortality after discharge (HR 1.40, 95% CI 1.01 to 1.93, P = 0.041). Conclusions: Pre-illness CHIP was independently associated with increased mortality and morbidity in critically ill adults with sepsis. These findings suggest that CHIP is a risk factor for sepsis severity. Elucidating the mechanism underlying this association could uncover new therapeutic interventions for sepsis.
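The headline association (OR = 1.54) came from a model adjusted for age, sex, race, and SOFA score; a minimal unadjusted sketch of how an odds ratio and its Wald 95% CI fall out of a 2x2 table, with made-up counts chosen only for illustration, looks like this:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a/b = exposed (CHIP) died/survived, c/d = unexposed died/survived."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Made-up counts that happen to give an unadjusted OR near 1.54:
or_, lo, hi = odds_ratio_ci(30, 70, 50, 180)
```

The adjusted OR in the study comes from a multivariable logistic regression, but the interpretation of the ratio and its CI is the same.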
Sato, T.; Ishiseki, M.; Kataoka, Y.; Someko, H.; Sato, H.; Minami, K.; Kaneko, T.; Takeda, H.; Crosby, A.
Objectives: Alarm fatigue is a patient safety concern in ICUs, yet no validated instrument exists to assess alarm fatigue among healthcare professionals in non-Western settings. This study aimed to cross-culturally adapt the Charité Alarm Fatigue Questionnaire (CAFQa) into Japanese and evaluate its reliability and validity among ICU nurses and physicians. Methods: The Japanese CAFQa was cross-culturally adapted following the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) guidelines, including forward translation, back-translation, expert panel review, and cognitive interviews. A multicenter cross-sectional validation study was performed across eight ICUs at five hospitals in Japan. A total of 129 participants (103 nurses and 26 physicians) completed the Japanese CAFQa, the NIOSH Brief Job Stress Questionnaire, and the Insomnia Severity Index (ISI). Structural validity, internal consistency, test-retest reliability (n = 102), convergent validity, and known-groups validity were assessed. Results: Confirmatory factor analysis confirmed the two-factor structure with acceptable fit (CFI = 0.922, RMSEA = 0.041, SRMR = 0.076), with standardized factor loadings ranging from 0.33 to 0.82. The two factors were not correlated (r = 0.05). Cronbach's alpha was 0.688 for the overall scale, 0.805 for Alarm Stress, and 0.649 for Alarm Coping. Test-retest ICCs ranged from 0.616 to 0.753. The CAFQa total score correlated with the NIOSH total (r = 0.261) and the ISI total (r = 0.338). Healthcare professionals with ≥4 years of ICU experience had higher Alarm Coping scores than those with 1-3 years (median 7.0 vs 6.5), and physicians scored higher on Alarm Coping than nurses (median 8.0 vs 7.0). Conclusions: The Japanese CAFQa demonstrated acceptable structural validity, reliability, and convergent and known-groups validity, providing the first validated tool for quantitatively measuring alarm fatigue in Japan.
Implications for Clinical Practice: The Japanese CAFQa enables ICU managers to quantify alarm fatigue at individual and unit levels, identify high-risk staff, and evaluate the effectiveness of alarm management interventions.
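The internal-consistency figures quoted above (Cronbach's alpha of 0.688 overall, 0.805 and 0.649 for the subscales) are derived from item variances; a minimal sketch, assuming items are scored numerically per respondent:

```python
def cronbach_alpha(items):
    """Cronbach's alpha. items is a list of item-score columns,
    one list of respondent scores per questionnaire item."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - item_var / var(totals))

# Two perfectly correlated items give the maximum alpha of 1.0:
alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3]])
```

Alpha rises as item scores covary more strongly relative to their individual variances, which is why the tighter Alarm Stress subscale scores higher than the overall scale.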
Sood, R.; Hevelone, N. D.; Davidsson, O. B.; Kristjansson, R. P.; Phillips, B. D.; Lantis, J. C.; Johannsson, G.
Objective: The objective of this study was to compare hospital length of stay and other clinical outcomes between intact fish skin graft (IFSG; Graftguide, Kerecis, Arlington, VA) and synthetic/biosynthetic dermal substitutes (SSS; Integra Dermal Regeneration Template and NovoSorb Biodegradable Temporizing Matrix) in propensity score matched burn patients using the American Burn Association Burn Care Quality Platform. Methods: This retrospective cohort study identified adult patients treated with a single dermal substitute product during hospitalization for acute burn injury. Patients receiving IFSG (n = 93) were matched 1:4 to patients receiving SSS (n = 372) using nearest neighbor propensity score matching on the logit scale. Matching covariates included total body surface area burned (TBSA), patient age, sex, burn severity classification, inhalation injury, and trauma diagnosis. The primary outcome was hospital length of stay (LOS), analyzed using a gamma generalized linear mixed model (GLMM). Secondary outcomes included the incidences of sepsis, graft loss, venous thromboembolism (VTE), and hospital-acquired pressure injury (HAPI). A prespecified sensitivity analysis was performed using a broader mixed-product cohort. Results: A total of 93 IFSG-treated patients from 17 burn centers admitted between 2019 and 2025 were matched 1:4 to 372 SSS-treated patients from 44 centers. Unadjusted mean LOS was 24.1 days (median 20, IQR 11 to 32) in the IFSG-treated group and 36.7 days (median 31, IQR 17 to 52) in the SSS-treated group, representing a 12.6-day reduction. GLMM-adjusted estimated marginal mean LOS was 24.2 days (95% CI, 20.0 to 29.4) for IFSG versus 33.5 days (95% CI, 30.0 to 37.6) for SSS (ratio 0.723; p = 0.00245), representing a 9.3-day reduction.
Sepsis (1.1% vs 4.6%), graft loss (3.2% vs 8.3%), VTE (2.2% vs 2.7%), and HAPI (2.2% vs 3.8%) were all numerically lower in the IFSG-treated arm, although GLMM-adjusted odds ratios were not statistically significant for any individual complication. The mixed-cohort sensitivity analysis (n = 229 IFSG vs 458 SSS across 67 centers) confirmed the primary finding, with a GLMM-adjusted LOS ratio of 0.716 (p = 0.0001). Conclusions: In this propensity score matched analysis of the ABA registry, IFSG was associated with a statistically significant and clinically meaningful reduction in hospital length of stay compared with synthetic/biosynthetic dermal substitutes in burns requiring dermal substitution and autografting, with all complication rates (sepsis, graft loss, VTE, and HAPI) numerically lower in the IFSG-treated arm. The shorter hospitalization was not achieved at the expense of safety. These findings support IFSG as a viable alternative to synthetic dermal substitutes in burns requiring dermal substitution and autografting. Prospective studies are warranted, particularly in larger burns requiring staged reconstruction.
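The 1:4 nearest-neighbor matching on the logit of the propensity score can be sketched as below. The abstract does not state the matching software, so this greedy, without-replacement version with hypothetical scores is only illustrative of the idea:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def match_1_to_k(treated, controls, k=4):
    """Greedy 1:k nearest-neighbor matching without replacement on the
    logit of the propensity score. treated/controls map id -> score."""
    pool = dict(controls)
    matches = {}
    for tid, p in treated.items():
        # Rank remaining controls by logit-scale distance to this patient
        ranked = sorted(pool, key=lambda cid: abs(logit(pool[cid]) - logit(p)))
        matches[tid] = ranked[:k]
        for cid in matches[tid]:
            del pool[cid]
    return matches

# Hypothetical propensity scores (not registry data):
matched = match_1_to_k({"t1": 0.50},
                       {"c1": 0.50, "c2": 0.52, "c3": 0.55,
                        "c4": 0.60, "c5": 0.90})
```

Matching on the logit scale spreads out scores near 0 and 1, which is why it is commonly preferred over matching on the raw probability.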
Masip, G.; Drouard, G.; Kaprio, J.
Introduction: Eating behaviors are consistently associated with weight-related traits, yet the biological factors contributing to individual differences in these behaviors remain poorly characterized. Plasma proteomics offers an opportunity to investigate the biological processes underlying eating behaviors. Methods: Participants were 730 young adult twins from the FinnTwin12 cohort. Eating behaviors were measured through self-report questionnaires, including the Three-Factor Eating Questionnaire-R18 and four additional items on eating styles. Associations between plasma proteins and eating behaviors were examined using generalized estimating equation models adjusted for age and sex, with additional analyses adjusting for body mass index (BMI). Within-pair analyses were conducted in both monozygotic (MZ) and dizygotic twin pairs to assess whether associations were influenced by genetic or environmental factors. Results: We identified 51 significant protein-eating behavior associations involving 35 unique proteins (FDR <0.05). We observed 19 associations for the item "overeating when feeling down" and 12 for the TFEQ factor of emotional eating. The identified proteins were predominantly enriched in immune system pathways, including the complement cascade and adaptive immune signaling. After further adjustment for BMI, 12 associations persisted, most of which were associated with eating-style items, suggesting that BMI had a substantial influence on protein-eating behavior associations. Within-pair analyses of MZ pairs indicated that several associations persist after accounting for genetic effects. Conclusion: Our study identifies plasma proteins associated with eating behaviors, largely involving immune-related pathways. While some associations attenuated in twin analyses, several persisted, suggesting environmental influences. 
These results highlight potential biomarker candidates and indicate that modifiable environmental factors may contribute to the proteomic profiles associated with eating behaviors, with possible implications for weight-related traits.
Huang, C.-H. S.; Kuehne, L. M.; Jacuzzi, G.; Olden, J. D.; Seto, E.
Military aviation training noise remains understudied despite its widespread impacts across urban, rural, and wilderness areas. The predominance of low-frequency noise and repetitive training can create pervasive noise pollution, yet past research often fails to capture the full range of health and quality-of-life effects. This study analyzed two complaint datasets related to Whidbey Island Naval Air Station noise: U.S. Navy records (2017-2020) and Quiet Skies Over San Juan County data (2021-2023). We analyzed and mapped sentiment intensity from noise complaints relative to modeled annual noise exposure, developed a typology to classify impacts, and modeled the environmental and operational factors influencing complaints. Findings revealed widespread negative sentiment and anger, often beyond the bounds of estimated noise contours, suggesting that annual cumulative noise models inadequately estimate community impacts. Complaints consistently highlighted sleep disturbance, hearing and health concerns, and compromised home environments due to shaking, vibration, and disruption of daily life. Residents also reported significant social, recreational, and work disruptions, along with feelings of fear, helplessness, and concern for children's well-being. The number of complaints was strongly associated with training schedules, with late-night sessions being the strongest predictor. A delayed response pattern suggests residents reach a frustration threshold before filing complaints. Overall, our findings demonstrate persistent negative sentiment and diverse impacts from military aviation noise. Results highlight the need for improved noise metrics, modeling, and operational adjustments to mitigate the most disruptive effects.
Ogaki, S.; Kaneda, M.; Nohara, T.; Fujita, S.; Osako, N.; Yagi, T.; Tomita, Y.; Ogata, T.
Study Objectives: To evaluate wearable sleep staging across sleep apnea severity, including very severe sleep apnea defined as an apnea-hypopnea index (AHI) ≥50 events/h, and to assess how training-set composition affects performance in this subgroup. Methods: We analyzed 552 overnight recordings, 318 from the Sleep Lab Dataset and 234 from the Hospital Dataset. In the Hospital Dataset, 26.5% had very severe sleep apnea. We developed a deep learning model for sleep staging using RR intervals from wrist-worn photoplethysmography and three-axis accelerometry. Baseline performance was assessed by cross-validation under 5-stage and 4-stage staging. We examined night-level associations with AHI severity. We also compared the baseline model with an ablation model trained on the same number of recordings but with more Sleep Lab Dataset and lower-AHI Hospital Dataset recordings, evaluating both models in the very severe subgroup. Results: In 5-stage classification, Cohen's kappa was 0.586 in the Sleep Lab Dataset and 0.446 in the Hospital Dataset. Under 4-stage staging, the gap narrowed, with kappa values of 0.632 and 0.525, respectively. In the Hospital Dataset, performance declined with increasing AHI severity. Among 62 recordings with very severe sleep apnea, reducing high-AHI representation in training lowered kappa from 0.365 to 0.303. Conclusions: Wearable sleep staging performance declined with greater sleep apnea severity in this clinical cohort. Clinical utility may benefit from training data that better represent the target severity spectrum and from selecting staging granularity to match the intended use case. Statement of Significance: Repeated laboratory polysomnography is impractical for long-term sleep apnea management. Wearable sleep staging could support scalable monitoring, yet its reliability in clinically severe sleep apnea has remained unclear. This study developed and evaluated a wearable sleep staging approach in both sleep-laboratory and hospital cohorts.
The hospital cohort included many severe and very severe cases. Performance was lower in the hospital cohort and declined with greater sleep apnea severity. A coarser staging scheme reduced the gap between cohorts, and models trained without representative very severe cases performed worse in this target population. These findings highlight the value of severity-aware model development and motivate future multi-night home validation with reliability cues.
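The agreement metric reported throughout (Cohen's kappa) corrects observed epoch-level agreement for chance agreement; a minimal sketch from a confusion matrix of reference versus predicted stages, with a toy two-class example:

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows = reference stages, cols = predicted stages)."""
    m = len(confusion)
    n = sum(sum(row) for row in confusion)
    p_obs = sum(confusion[i][i] for i in range(m)) / n  # observed agreement
    row_tot = [sum(row) for row in confusion]
    col_tot = [sum(confusion[i][j] for i in range(m)) for j in range(m)]
    p_exp = sum(r * c for r, c in zip(row_tot, col_tot)) / n**2  # chance
    return (p_obs - p_exp) / (1 - p_exp)

# Toy 2-class example (e.g. wake vs. sleep epochs):
kappa = cohens_kappa([[45, 5], [15, 35]])  # about 0.6
```

Collapsing 5-stage scoring to 4 stages merges hard-to-separate classes, which raises observed agreement and is consistent with the narrowed kappa gap reported above.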
Jacobsen, A. M.; Quednow, B. B.; Bavato, F.
Importance: Blood neurofilament light chain (NfL) and glial fibrillary acidic protein (GFAP) are entering clinical use in neurology as markers of neuroaxonal and astrocytic injury, but their utility in psychiatry is unclear. Objective: To determine whether psychiatric diagnoses are associated with altered plasma NfL and GFAP levels. Design, Setting, and Participants: This population-based study examined plasma NfL and GFAP among 47,495 participants from the UK Biobank (54.0% female; 93.5% White; mean [SD] age 56.8 [8.2] years) who provided blood samples and sociodemographic and clinical data between 2006 and 2010. Normative modeling was applied to assess associations between 7 lifetime psychiatric diagnostic categories and deviations from expected NfL and GFAP levels, while accounting for neurological diagnoses, cardiometabolic burden, and substance use. Data were analyzed between July 2025 and March 2026. Main Outcomes and Measures: Deviations in plasma NfL and GFAP levels from normative predictions. Results: Relative to the reference population, plasma NfL levels were higher among individuals with bipolar disorder (d=0.20; 95% CI, 0.03-0.37; p=0.03), recurrent depressive disorder (d=0.23; 95% CI, 0.07-0.38; p=0.009), and depressive episodes (d=0.06; 95% CI, 0.02-0.10; p=0.01), lower among individuals with anxiety disorders (d=-0.07; 95% CI, -0.12 to -0.02; p=0.008), but did not differ in schizophrenia spectrum, stress-related, or other psychiatric disorders. Plasma GFAP levels were not elevated in any psychiatric disorders. Variability in NfL levels was greater among individuals with schizophrenia spectrum disorders (variance ratio [VR]=1.30; p=0.005), depressive episodes (VR=1.06; p=0.006), and anxiety disorders (VR=1.08; p=0.005). Variability in GFAP levels was increased only in anxiety disorders (VR=1.08; p=0.01).
Plasma NfL levels exceeding percentile-based normative thresholds were more common among individuals with schizophrenia spectrum disorders, bipolar disorder, recurrent depressive disorder, and depressive episodes. Neurological diagnoses, cardiometabolic burden, and substance use were associated with plasma NfL and GFAP levels. Conclusions and Relevance: This study provides population-level evidence of plasma NfL elevation in bipolar and depressive disorders and increased variability in schizophrenia spectrum, bipolar, and depressive disorders, supporting its potential as a biomarker in psychiatry and informing its ongoing neurological applications. Plasma GFAP levels, in contrast, were largely unaltered across psychiatric disorders. Key Points: Question: Are plasma neurofilament light chain (NfL) and glial fibrillary acidic protein (GFAP) levels altered in psychiatric disorders? Findings: In this cohort study including 47,495 individuals, normative modeling revealed that plasma NfL levels were elevated in bipolar and depressive disorders, whereas plasma GFAP levels were not elevated in any psychiatric disorder. Plasma NfL levels also showed higher variability in schizophrenia spectrum, bipolar, and depressive disorders. Meaning: Plasma NfL shows distinct alterations in schizophrenia spectrum and affective disorders, supporting its further investigation as a biomarker in clinical psychiatry and highlighting the need to consider psychiatric comorbidity in neurological applications.
Xu, M.; Philips, R.; Singavarapu, A.; Zheng, M.; Martin, D.; Nikolin, S.; Mutz, J.; Becker, A.; Firenze, R.; Tsai, L.-H.
Background: Gamma oscillation dysfunction has been implicated in neuropsychiatric disorders. Restoring gamma oscillations via brain stimulation represents an emerging therapeutic approach. However, the strength of its clinical effects and treatment moderators remain unclear. Methods: We conducted a systematic review and meta-analysis to examine the clinical effects of gamma neuromodulation in neuropsychiatric disorders. A literature search for controlled trials using gamma stimulation was performed across five databases up until April 2025. Effect sizes were calculated using Hedges' g. Separate analyses using the random-effects model examined the clinical effects in schizophrenia (SZ), major depressive disorder (MDD), bipolar disorder (BD), and autism spectrum disorder (ASD). For SZ and MDD, subgroup analyses evaluated the effects of stimulation modality, stimulation frequency, treatment duration, and pulses per session. Results: Fifty-six studies met the inclusion criteria (n = 943 for SZ, 916 for MDD, 175 for BD, 232 for ASD). In SZ, gamma stimulation was associated with improvements in positive symptoms (k = 10, g = -0.60, p < 0.001), negative symptoms (k = 12, g = -0.37, p = 0.03), depressive symptoms (k = 8, g = -0.39, p < 0.001), anxiety symptoms (k = 5, g = -0.59, p < 0.001), and overall cognitive function (k = 7, g = 0.55, p < 0.001). Stimulation frequency and treatment duration moderated therapeutic effects. In MDD, reductions in depressive symptoms were observed (k = 23, g = -0.34, p = 0.007). Conclusion: Gamma neuromodulation showed moderate therapeutic benefits in SZ and MDD. Substantial heterogeneity likely reflects protocol differences, highlighting the need for well-powered future trials.
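Hedges' g, the effect size used in this meta-analysis, is Cohen's d scaled by a small-sample correction factor; a minimal sketch with illustrative (not trial) summary statistics:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: standardized mean difference (Cohen's d) multiplied
    by the small-sample correction factor J."""
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                         / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # correction factor J
    return j * d

# Illustrative symptom scores (active vs. sham), not from any trial;
# a negative g means lower symptoms in the active arm:
g = hedges_g(10.0, 2.0, 50, 12.0, 2.0, 50)
```

The correction factor J shrinks toward 1 as samples grow, so g and d are nearly identical in large trials but diverge in the small studies typical of this literature.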
Quide, Y.; Lim, T. E.; Gustin, S. M.
Background: Early-life adversity (ELA) is a risk factor for enduring pain in youth and is associated with alterations in brain morphology and function. However, it remains unclear whether ELA-related neurobiological changes contribute to the development of enduring pain in early adolescence. Methods: Using data from the Adolescent Brain Cognitive Development (ABCD) Study, we examined multimodal magnetic resonance imaging (MRI) markers in children assessed at baseline (ages 9-11 years) and at 2-year follow-up (ages 11-13 years). ELA exposure was defined at baseline to maximise temporal separation between early adversity and later enduring pain. Participants with enduring pain at follow-up (n = 322) were compared to matched pain-free controls (n = 644). Structural MRI, diffusion MRI (fractional anisotropy, mean diffusivity), and resting-state functional connectivity data were analysed. Linear models tested main effects of enduring pain, ELA, and their interaction on brain metrics, controlling for relevant covariates. Results: ELA exposure was associated with smaller caudate and nucleus accumbens volumes, and reduced surface area of the left rostral middle frontal gyrus. No significant effects of enduring pain or an ELA-by-enduring-pain interaction were observed across grey matter, white matter, or functional connectivity measures. Conclusions: ELA was associated with alterations in fronto-striatal regions in late childhood, but these changes were not linked to enduring pain in early adolescence. These findings suggest that ELA-related neurobiological alterations may represent early markers of vulnerability rather than concurrent correlates of enduring pain. Longitudinal follow-up is needed to determine whether these alterations contribute to later chronic pain risk.
Spann, D. J.; Hall, L. M.; Moussa-Tooks, A.; Sheffield, J. M.
Background: Negative symptoms are core features of schizophrenia that relate strongly to functional impairment, yet interventions targeting these symptoms remain largely ineffective. Emerging theoretical work highlights how environmental factors may shape and maintain negative symptoms. Although racial disparities in schizophrenia diagnosis among Black Americans are well documented and linked to racial stress and psychosis, the impact of racial stress on negative symptoms has not been examined. This study provides an initial test of a novel theory proposing that racial stress (here measured by racial discrimination) influences negative symptom severity through exacerbation of negative cognitions about the self, particularly defeatist performance beliefs (DPB). Study Design: Participants diagnosed with a schizophrenia-spectrum disorder (SSD) (N = 208; 80 Black, 128 White) completed the Positive and Negative Syndrome Scale (PANSS), the Defeatist Beliefs Scale, and self-report measures of subjective racial and ethnic discrimination (Racial and Ethnic Minority Scale and General Ethnic Discrimination Scale). Relationships among variables were tested using linear regression and mediation analysis. Study Results: Black participants exhibited significantly greater total and experiential negative symptoms than White participants, with no group difference in DPB. Racial discrimination explained 46% of the relationship between race and negative symptoms. Among Black participants, higher DPB were associated with greater negative symptom severity. Discrimination was positively related to both DPB and negative symptoms. DPB partially mediated the relationship between discrimination and negative symptoms. Conclusions: Findings suggest that racial stress contributes to negative symptom severity via defeatist beliefs among Black individuals, highlighting potential targets for culturally informed interventions.
Xu, J.; Parker, R. M. A.; Bowman, K.; Clayton, G. L.; Lawlor, D. A.
Background: Higher levels of sedentary behaviour, such as leisure screen time (LST), and lower levels of physical activity are associated with diseases across multiple body systems, which contribute to a large global health burden. Whether these associations are causal is unclear. The primary aim of this study was to investigate the causal effects of higher LST (given greater power) and, secondarily, lower moderate-to-vigorous intensity physical activity (MVPA), on a wide range of diseases in a hypothesis-free approach. Methods: A two-sample Mendelian randomisation phenome-wide association study was conducted for the main analyses. Single nucleotide polymorphisms (SNPs) were first selected as genetic instruments for LST (hours of television watched per day; 117 SNPs) and MVPA (higher vs. lower; 18 SNPs) based on the genome-wide significance threshold (p < 5×10^-8) from the largest relevant genome-wide association study (GWAS). For disease outcomes, we used summary results from FinnGen GWAS, including 1,719 diseases defined by hospital discharge International Classification of Diseases (ICD) codes in 453,733 European participants. For the main analyses, we used the inverse-variance weighting method with a Bonferroni-corrected p-value threshold of p ≤ 3.47×10^-4. Sensitivity analyses included Steiger filtering, MR-Egger and weighted median analyses, and data from UK Biobank were used to explore replication. Findings: Genetically predicted higher LST was associated with increased risk of 87 (5.1% of the 1,719) diseases. Most of these diseases were in the musculoskeletal and connective tissue (n=37), genitourinary (n=12) and respiratory (n=8) systems. Genetic liability to lower MVPA was associated with six diseases: three in the musculoskeletal and connective tissue and genitourinary systems (with greater risk of these diseases also identified for higher LST), and three in the respiratory and genitourinary systems. Sensitivity analyses largely supported the main analyses.
Results replicated in UK Biobank, where data were available. Conclusions: Higher levels of sedentary behaviour and lower levels of physical activity causally increase the risk of diseases across multiple body systems, making them promising targets for reducing multimorbidity.
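The inverse-variance weighting (IVW) method used in the main analyses combines per-SNP Wald ratios, weighting each by its precision; a minimal sketch with made-up summary statistics:

```python
def ivw_estimate(beta_exp, beta_out, se_out):
    """Inverse-variance weighted MR estimate from per-SNP summary stats.
    Each SNP's Wald ratio (beta_out / beta_exp) is weighted by
    beta_exp**2 / se_out**2, the inverse variance of that ratio."""
    num = sum(bx * by / s**2
              for bx, by, s in zip(beta_exp, beta_out, se_out))
    den = sum(bx**2 / s**2 for bx, s in zip(beta_exp, se_out))
    return num / den

# Two hypothetical SNPs, both with Wald ratio 0.5 (SNP-outcome effect
# half the SNP-exposure effect), so the pooled estimate is also 0.5:
est = ivw_estimate([0.10, 0.20], [0.05, 0.10], [0.01, 0.02])
```

The sensitivity analyses named above (MR-Egger, weighted median) reuse the same per-SNP ratios but relax the IVW assumption that every instrument is valid.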
Pietilainen, O.; Salonsalmi, A.; Rahkonen, O.; Lahelma, E.; Lallukka, T.
Objectives: Longer lifespans mean more years spent in retirement, despite efforts to raise the retirement age. It is therefore important to study how many retirement years can be spent free of disease. This study examined socioeconomic and sociodemographic differences in healthy years spent in retirement. Methods: We followed a cohort of retired Finnish municipal employees (N=4231, average follow-up 15.4 years) using national administrative registers for major chronic diseases: cancer, coronary heart disease, cerebrovascular disease, diabetes, asthma or chronic obstructive pulmonary disease, dementia, mental disorders, and alcohol-related disorders. Median healthy years in retirement and age at first occurrence of illness (ICD-10 and ATC-based) in each combination of sex, occupational class, and age of retirement were predicted using Royston-Parmar models. Prevalence rates for each diagnostic group were calculated. Results: The most healthy years in retirement were observed among women who had worked in semi-professional jobs and retired at age 60-62 (median predicted healthy years 11.6, 95% CI 10.4-12.7). The fewest healthy years were observed among men who had worked in routine non-manual jobs and retired after age 62 (median predicted healthy years 6.5, 95% CI 4.4-9.5). Diabetes was slightly more common among lower occupational class women, and dementia among manual-class women who retired at age 60-62. Discussion: Healthy years in retirement are not enjoyed equally by women and men, or by those who retire earlier or later. Policies aiming to increase the retirement age should consider the effects of these gaps on retirees and the equitability of those effects.
Hung, J.; Smith, A.
The global ambition to end the human immunodeficiency virus (HIV) epidemic requires understanding which system-level policy levers, enacted under the framework of Universal Health Coverage (UHC), are most effective in achieving both transmission reduction and diagnostic coverage. This study addresses an important evidence gap by quantifying the within-country association between measurable UHC policy indicators and the estimated rate of new HIV infections across nine Southeast Asian countries between 2013 and 2022. Employing a fixed-effects panel data methodology, the analysis controls for time-invariant national heterogeneity, ensuring reliable estimates of policy impact. We found that marginal changes in total current health expenditure (CHE) as a percentage of gross domestic product (GDP) were not statistically significantly associated with changes in HIV incidence. However, increases in the UHC Infectious Disease Service Coverage Index were statistically significantly associated with concurrent reductions in HIV incidence (p < 0.001), suggesting the efficacy of targeted service implementation as the principal driver of curbing new HIV infections. In addition, the UHC Reproductive, Maternal, Newborn, and Child Health Service Coverage Index exhibited a statistically significant positive association with changes in HIV incidence (p < 0.01), which is interpreted as a surveillance artefact resulting from expanded detection and reporting of previously undiagnosed HIV cases. Furthermore, out-of-pocket (OOP) health expenditure as a percentage of CHE showed a counter-intuitive negative association with changes in HIV incidence (p < 0.01), suggesting that this metric primarily reflects ongoing indirect cost burdens on the established patient cohort or, alternatively, represents a diagnostic access barrier that results in lower case finding.
These findings suggest that policymakers should prioritise investment in targeted infectious disease service efficacy over aggregate fiscal commitment and utilise integrated sexual health platforms for strengthened HIV surveillance and case identification.
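The fixed-effects (within) estimator used in this panel analysis removes time-invariant country characteristics by demeaning each variable within country before pooling; a single-regressor sketch with toy data:

```python
def within_estimator(panel):
    """One-regressor fixed-effects (within) estimator.
    panel maps each country to a list of (x, y) observations over time;
    demeaning within country removes time-invariant country effects."""
    num = den = 0.0
    for obs in panel.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den

# Toy panel where y = 2x plus a country-specific intercept
# (10 for "A", 5 for "B"); the intercepts drop out, leaving slope 2.0:
beta = within_estimator({"A": [(1, 12), (2, 14), (3, 16)],
                         "B": [(1, 7), (2, 9)]})
```

The real analysis involves multiple regressors and clustered standard errors, but the demeaning step is what makes the estimates "within-country" as described above.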
Hassan, S. S.; Nordqvist-Kleppe, S.; Asinger, N.; Wang, J.; Dillner, J.; Arroyo Muhr, L. S.
Human papillomavirus (HPV) testing is the primary method for cervical cancer screening, and a negative HPV test is associated with a very low subsequent risk of invasive cancer. Nevertheless, a small number of cervical cancers are diagnosed following an HPV-negative test result, posing challenges within HPV-based screening pathways. Using nationwide Swedish registry data of HPV testing, we identified women diagnosed with invasive cervical cancer between 2019 and 2024 and reconstructed HPV testing histories from the National Cervical Screening Registry (NKCx). The most recent HPV test prior to diagnosis was defined as the index test, and longitudinal HPV testing trajectories were classified among women with an HPV-negative index test. Of 3,000 women diagnosed with invasive cancer, 243 (8.1%) had an HPV-negative index test. These women were older at diagnosis and more frequently diagnosed at advanced stages compared with women with an HPV-positive index test. Most HPV-negative index tests (66.3%) were performed in the peri-diagnostic period (+/- 30 days). Among women with an HPV-negative index test, 52.7% (128/243) had no prior HPV testing recorded, while the remainder had consistently HPV-negative histories (33.3%, 83/243) or evidence of prior HPV positivity before the index negative test (14%, 32/243). Possible recurrent HPV positivity following an intervening negative test was rare (0.4%, 1/243). HPV-negative screening results preceding invasive cancer reflect heterogeneous screening histories and cannot be explained solely by test failure. Findings highlight the importance of reaching women earlier in screening programs and show that fluctuating HPV detectability is rare.
Xiao, M.; Girard, Q.; Pender, M.; Rabezara, J. Y.; Rahary, P.; Randrianarisoa, S.; Rasambainarivo, F.; Rasolofoniaina, O.; Soarimalala, V.; Janko, M. M.; Nunn, C. L.
Purpose Antibiotic use (ABU) is a major driver of antimicrobial resistance (AMR), but ABU patterns are poorly understood in low-income countries, where the burden of AMR is great and ABU is insufficiently regulated. Here, we report ABU from ten sites ranging from rural villages to small cities in Madagascar, a country with high AMR levels, and present results from modeling to identify factors that may be associated with ABU in this setting. Methods We conducted surveys of 290 individuals from ten sites in the SAVA Region of northeast Madagascar to gather data on sociodemographic characteristics, agricultural and animal husbandry practices, recent antibiotic use, the antibiotics that participants recalled using in their lifetimes, and the sources of their antibiotics. Using these data, we conducted statistical analyses with a mixed-effects logistic model to determine which characteristics were associated with recent antibiotic use. Results Nearly all respondents (N=283, 97.6%) reported ABU in their lifetimes, with amoxicillin being the most widely reported antibiotic (N=255, 90.1% of those reporting ABU). All recalled antibiotics were classified as frontline drugs except for ciprofloxacin. Most respondents who reported antibiotic use also reported obtaining antibiotics without prescriptions from local stores (N=273, 96.5%), while only 52.3% (N=148) reported obtaining antibiotics through a prescriptive route, such as from a health clinic or private doctor. Of the 127 individuals (44.9%) who reported recent ABU, men were significantly less likely than women to have recently taken antibiotics. Conclusions Our findings provide new insights into ABU in agricultural settings in low-income countries, which have historically been understudied in AMR and pharmacoepidemiologic research. Knowledge of ABU patterns supports understanding of AMR dynamics and AMR control efforts in these contexts, such as interventions on inappropriate antibiotic dispensing.
Key points
- Antibiotic use (ABU) in Madagascar is largely unstudied despite its role in antimicrobial resistance (AMR), of which Madagascar bears a high burden.
- ABU was widespread among livestock owners in northeast Madagascar, with the majority of study participants reporting ABU in their lifetimes and most people reporting ABU also having taken antibiotics in the previous three months.
- Most respondents reported obtaining their antibiotics from non-pharmaceutical stores, indicating high levels of unregulated ABU, though more than half also reported sourcing their antibiotics through prescriptive means (like doctors and health clinics).
- Men were less likely than women to have taken antibiotics in the previous three months.
- These findings support the development of interventions to mitigate the burden of AMR in Madagascar and similar contexts while underscoring the need for more comprehensive research on the drivers and patterns of ABU.
Plain language summary In this study, we provide basic information on antibiotic use (ABU) patterns in Madagascar, a country that experiences high levels of resistance but has been particularly understudied in AMR and pharmacological research. We surveyed 290 farmers with livestock from ten sites across northeast Madagascar about their ABU and found that nearly all study participants (N=283, 97.6%) had used antibiotics in their lifetimes, while a little under half of those who reported ABU also reported using antibiotics in the previous three months (N=127, 44.9%). The most used antibiotic was amoxicillin (N=255, 90.1%). Most people obtained their antibiotics from sources that do not require prescriptions, like general stores, indicating that most ABU is unregulated. Through modeling, we also found that men were less likely than women to have taken antibiotics in the previous three months (OR=0.50, CI 0.30-0.82).
These findings help us better understand the dynamics of ABU in low-income countries, which have historically been understudied in AMR and pharmacological research. They also support efforts to mitigate the burden of AMR by revealing ABU dynamics that may contribute to the emergence and spread of AMR, as well as identifying targets for intervention to curb inappropriate ABU.
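The reported sex effect (OR=0.50, CI 0.30-0.82) comes from the authors' mixed-effects logistic model. As a simpler illustration of how an odds ratio and its Wald confidence interval relate to underlying counts, here is a minimal sketch using hypothetical 2x2 counts, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: 40/140 men vs. 80/180 women with recent use
or_, lo, hi = odds_ratio_ci(40, 100, 80, 100)
```

With these made-up counts the crude OR is 0.50, close in spirit to the adjusted estimate reported; a mixed-effects model would additionally account for site-level clustering.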
Shaetonhodi, N. G.; De Vos, L.; Babalola, C.; de Voux, A.; Joseph Davey, D.; Mdingi, M.; Peters, R. P. H.; Klausner, J. D.; Medina-Marino, A.
Background Curable sexually transmitted infections (STIs), including Chlamydia trachomatis, Neisseria gonorrhoeae, and Trichomonas vaginalis, remain highly prevalent among pregnant women in South Africa. Despite poor diagnostic performance in pregnancy, syndromic management remains standard care. Point-of-care (POC) screening enables aetiological diagnosis and same-visit treatment but is not yet included in national guidelines. We conducted a mixed-methods process evaluation to examine determinants of antenatal POC STI screening implementation in public facilities. Methods This evaluation was embedded within the three-arm Philani Ndiphile randomized trial (March 2021-February 2025) across four public clinics in the Eastern Cape. Screening used a near-POC, electricity-dependent nucleic acid amplification test with a 90-minute turnaround time. Reach, Adoption, Implementation, and Maintenance were assessed using the RE-AIM framework. Quantitative indicators included uptake of screening, treatment, and follow-up attendance. Qualitative data included in-depth interviews with 20 pregnant women and five focus group discussions with 21 research staff and government healthcare workers. The Consolidated Framework for Implementation Research guided qualitative analysis. Findings were integrated using narrative weaving. Results Screening uptake was high (99.0%), with treatment coverage of 95.2% at baseline and 93.5% at repeat screening. Same-day treatment was lower (50.7% and 69.8%) and varied substantially by facility, reflecting operational constraints including turnaround time, patient volume, infrastructure, and electricity. Attendance was higher when screening was integrated into routine ANC. Women valued screening for infant health, while providers recognised advantages over syndromic management but highlighted workforce, resource, and maintenance constraints. Socioeconomic factors, including transport costs, hunger, and work commitments, influenced retention and waiting.
Conclusions Antenatal POC STI screening was acceptable and achieved high treatment coverage in a research setting. However, same-day treatment was constrained by operational requirements of the testing platform. Scale-up will require workflow integration, strengthened health system capacity, and faster diagnostics suited to routine antenatal care.
Key Messages
What is already known on this topic
Syndromic management remains standard antenatal care in many low-resource settings despite failing to capture up to 89% of infections that remain asymptomatic. Point-of-care aetiological screening has demonstrated feasibility, acceptability, and potential clinical benefit in research settings, yet has not been widely adopted into national policy. Limited evidence exists on the health system requirements and contextual determinants influencing scale-up within routine public facilities.
What this study adds
This mixed-methods process evaluation demonstrates high uptake and treatment coverage of antenatal POC STI screening in a trial setting, while identifying facility-level, structural, and socioeconomic factors shaping same-day treatment and retention. We show that implementation success varies substantially across clinics and depends on assay characteristics, workflow integration, human resources, infrastructure reliability, and follow-up capacity.
How this study might affect research, practice or policy
These findings provide implementation-relevant evidence to inform national policy deliberations on integrating POC STI screening into antenatal care. Sustainable scale-up will require context-adapted delivery models, strengthened workforce and supply systems, faster diagnostics, and alignment with existing ANC workflows to ensure equitable and durable impact.
Areb, M.; Huybregts, L.; Tamiru, D.; Toure, M.; Biru, B.; Fall, T.; Haddis, A.; Belachew, T.
Background This study aimed to assess caregiver knowledge of Infant and Young Child Feeding (IYCF), child health, severe acute malnutrition (SAM) screening, and Community-Based Management of Acute Malnutrition (CMAM), its determinants, and associations with IYCF/WaSH (water, sanitation, and hygiene) practices among caregivers of children 6-59 months with SAM in Ethiopian agrarian and pastoralist settings. Methods Data were from the baseline survey of the R-SWITCH Ethiopia cluster-randomized controlled trial (cRCT), which screened ~28,000 children aged 6-59 months and identified 686 SAM cases. Caregiver knowledge was evaluated using a validated 32-item questionnaire (Cronbach's alpha for internal reliability) and analyzed via linear mixed-effects and Poisson regression models in Stata 17. Results Caregiver knowledge was positively associated with improved IYCF/WaSH practices among children aged 6-23 months with SAM, including higher minimum dietary diversity (MDD: IRR=1.50), minimum acceptable diet (MAD: IRR=1.63), and reduced zero vegetable/fruit intake (IRR=0.77), as well as MDD in children aged 24-59 months, improved water access (IRR=1.19), water treatment (IRR=2.02), and handwashing stations (IRR=1.41). Being literate (β = 4.1; 95% CI: 1.5-6.6, p = 0.016), being pregnant (β = 4.4; 95% CI: 0.9-7.8, p = 0.018), having the child weighed at a health post/health center (β = 8.9; 95% CI: 3.5-14.2, p ≤ 0.001), and a higher household wealth index (β = 11.8; 95% CI: 3.6-20.1, p = 0.005) were associated with higher knowledge, while possible depression (β = -0.3; 95% CI: -0.5 to 0.0, p = 0.015) was associated with lower knowledge. Conclusion Caregiver knowledge determines better IYCF/WaSH practices among children aged 6-59 months with SAM. Literacy, pregnancy, having the child weighed at a health post or health center, and greater household wealth were associated with higher caregiver knowledge, whereas possible depression was associated with lower knowledge.
Integrating context-specific caregiver education and mental health support into CMAM, GMP (growth monitoring and promotion), and primary care services could enhance feeding/WaSH practices in Ethiopia.
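The IRRs above (e.g., MDD: IRR=1.50) come from the authors' Poisson regression models. To make the quantity concrete, here is a minimal sketch of a crude two-group incidence rate ratio with a Wald confidence interval, using hypothetical counts rather than the study's data:

```python
import math

def rate_ratio_ci(events_a, time_a, events_b, time_b, z=1.96):
    """Crude incidence rate ratio (group A vs. group B) with a Wald 95% CI."""
    irr = (events_a / time_a) / (events_b / time_b)
    se = math.sqrt(1 / events_a + 1 / events_b)  # SE of log(IRR)
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, lo, hi

# hypothetical: 30 events per 100 child-months vs. 20 per 100
irr, lo, hi = rate_ratio_ci(30, 100, 20, 100)
```

A regression model generalizes this by adjusting for covariates; exponentiating a Poisson coefficient yields the adjusted IRR.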
Heffernan, P. M.; van den Berg, H.; Yadav, R. S.; Murdock, C. C.; Rohr, J. R.
Background Insecticides remain the cornerstone of mosquito vector control for malaria, dengue, and other mosquito-borne diseases, yet global patterns of deployment and their socioeconomic and environmental drivers are poorly characterized. Understanding where and why insecticides are used is essential for better targeting control efforts and ensuring they are effective, equitable, and efficient. Methods We analyzed annual country-level insecticide-use data from 122 countries (1990-2019), reported as standard spray coverage for insecticide-treated nets (ITNs), residual spraying (RS), spatial spraying (SS), and larviciding (LA). Generalized linear mixed models and hurdle models quantified associations between deployment and disease incidence, human development index (HDI), human population density, temperature, and precipitation. Models were evaluated using repeated cross-validation and applied to generate downscaled predictions of insecticide use at subnational administrative region level 2 (ADM2) globally. Findings Insecticide deployment increased with malaria and dengue incidence, but this response was substantially stronger in higher-HDI countries, indicating that deployment depends on socioeconomic capacity as well as disease burden, which leads to weaker scaling in lower-resource settings. Intervention types exhibited distinct patterns: ITN use tracked malaria burden, whereas infrastructure-intensive approaches (e.g., RS and SS) were concentrated in higher-HDI settings and increased with Aedes-borne disease incidence. Downscaled ADM2-level maps uncovered substantial within-country heterogeneity that is obscured at the national scale, highlighting regions where predicted deployment remains low relative to disease risk across sub-Saharan Africa, South Asia, and parts of Latin America. Interpretation Global insecticide deployment reflects not only epidemiological need but also economic and logistical capacity, creating mismatches between risk and control.
High-resolution mapping can support more equitable allocation of interventions, guide insecticide resistance stewardship, and improve strategic planning as climate and urbanization reshape mosquito-borne disease risk.
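The hurdle models used here split deployment into two parts: whether a country deploys an intervention at all, and, if so, how much. The key identity behind that decomposition can be shown with simulated coverage data (entirely hypothetical values, not the study's dataset):

```python
import random

random.seed(0)
# hypothetical country-year spray coverage: ~40% structural zeros
# (no deployment at all), otherwise a positive coverage value
data = [0.0 if random.random() < 0.4 else random.uniform(5.0, 80.0)
        for _ in range(1000)]

positives = [y for y in data if y > 0]
p_deploy = len(positives) / len(data)             # hurdle part: any deployment?
mean_intensity = sum(positives) / len(positives)  # positive part: how much?

# the two parts multiply back to the overall mean coverage:
# E[Y] = P(Y > 0) * E[Y | Y > 0]
overall_mean = p_deploy * mean_intensity
```

Modeling the two parts separately lets different covariates (e.g., HDI for the deployment decision, disease incidence for intensity) drive each component.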
Maneraguha, F. K.; Cote, J.; Bourbonnais, A.; Arbour, C.; Chagnon, M.; Hatem, M.
Background Comprehensive sexuality education (CSE) is essential to the health and well-being of young people. In the Democratic Republic of Congo (DRC), where more than 65% of the population is under the age of 25, access to interpersonal CSE remains limited owing to sociocultural and structural barriers. This exposes young people to persistent socio-sanitary vulnerabilities. In this context, mobile health apps (MHAs) constitute a promising solution, supported by the growing use of smartphones among young Congolese. However, this group's intention to use MHAs for CSE has been the subject of little research to date. Objective The aim of this study was to identify predictors of intention to use MHAs among young Congolese, based on the extended Unified Theory of Acceptance and Use of Technology (UTAUT2). Methods A predictive correlational study was conducted in eight public secondary schools in Bukavu (DRC) with a stratified random sample of 859 students. Predictors of intention to use (performance expectancy [PE], effort expectancy [EE], social influence [SI], facilitating conditions [FC], and perceived risk [PR]) and moderators (age, gender, and past MHA experience) were measured from data collected through a self-administered UTAUT questionnaire. Descriptive and multivariate analyses were run in SPSS version 28. Results Mean age of participants was 16.3 years (SD = 1.5). Boys made up 55.1% of the sample. Overall, 51.0% of the sample owned a smartphone, of whom 62.3% reported having easy access to mobile data and 16.2% were already using MHAs to learn about sexual health. Intention to use MHAs was positively influenced by PE (β = 0.523, p < 0.001), EE (β = 0.115, p < 0.001), and SI (β = 0.113, p < 0.001). FC (p = 0.260) and PR (p = 0.631), however, had no significant influence. Age moderated all of the relationships tested (F(1, 849-854) = 9.97-20.82; p ≤ 0.002), with more marked effects observed among younger participants aged 14-15 years.
The final model explained 44% of the variance, indicating good predictive power. Conclusion Intention to use digital CSE was explained primarily by PE, EE, and SI and moderated by age. To strengthen this intention, stakeholders will need to promote e-interventions that are pertinent, easy to use, socially valued, and tailored to young people's needs and to the local context.
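"Variance explained" (here, 44%) is the R² of the fitted regression. As a self-contained illustration of how R² is computed for a simple least-squares line, here is a sketch on toy data (hypothetical values, not the UTAUT survey data):

```python
def r_squared(x, y):
    """R^2 of a simple least-squares fit y ~ b0 + b1 * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((a - mx) * (b - my) for a, b in zip(x, y))
          / sum((a - mx) ** 2 for a in x))
    b0 = my - b1 * mx
    ss_res = sum((b - (b0 + b1 * a)) ** 2 for a, b in zip(x, y))  # residual SS
    ss_tot = sum((b - my) ** 2 for b in y)                        # total SS
    return 1.0 - ss_res / ss_tot

# toy scores: predictor vs. intention-to-use rating
r2 = r_squared([1, 2, 3, 4], [2, 5, 5, 9])
```

The multivariate model in the study extends this to several predictors at once, but the interpretation of R² as the share of outcome variance captured by the fitted values is the same.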